Peter Richtarik

Sci-Café: What's the Value of Communicating Your Research?

GECCO2021 - pap244 - GA - Analysis of Evolutionary Diversity Optimisation for Permutation Problems

20170912 - Domain-Specific Languages for Convex Optimization

A Hyperfast Second-order Method for Distributed Convex Optimization

Mathurin Massias: Coordinate Descent for SLOPE

Succinct Non-Interactive Secure Computation

When the Optimum is also Blind: A New Perspective on Universal Optimization

Federated Learning with Peter Kairouz

NeurIPS 2019 – PowerSGD: Practical low-rank gradient compression for distributed optimization

LT "Stochastic Proximal Gradient Descent with Acceleration Techniques" - #DSIRNLP 06

1W-MINDS: Zaiwen Wen, Feb 25, 2021, Stochastic Second-Order Methods For Deep Learning

FLOW Seminar #35: Eduard Gorbunov (MIPT) Faster Non-Convex Distributed Learning with Compression

Dr Armin Eftekhari, University of Edinburgh

Variance Reduction is an Antidote to Byzantines: Better Rates, Weaker Assumptions and Communicati...

Lecture 27: Dual Averaging

Calculus for non-Differentiable Functions? | Introduction to Machine Learning | CS771

Primal-Dual Optimization and Application to Sampling

EPSRC Council Open Forum 15 October 2013

Hedy Attouch: Lecture 2 on Dynamical Systems and Optimization

Vamsi K. Potluru - Coordinate Descent for mixed-norm NMF

signSGD: compressed optimisation

Professor Mihaela van der Schaar, Oxford University

OWOS: Minh N. Dao - "The Proximal Subgradient Method for Nonsmooth Sum-of-Ratios Optimization Problems"

Sinogram upsampling using Primal-Dual UNet for undersampled CT and radial MRI reconstruction
